1 The data

In smoking_status, the value Unknown should be recoded to NA.

It can also be treated as ordered: never < formerly < smokes

The other predictors look OK

df <- read_csv("data/healthcare-dataset-stroke-data.csv", col_types = "cfdfffffddcf", na = c("Unknown", "N/A"))
# if smoking_status is set to factor in col_types, the na = argument won't catch "Unknown"
df$smoking_status <- as_factor(df$smoking_status)
df$smoking_status <- fct_relevel(df$smoking_status, c("never smoked", "formerly smoked", "smokes"))
df$stroke <- factor(ifelse(df$stroke == 1, "yes", "no"), levels = c("no", "yes"))

df

1.1 skimr description

Skip id column

df$id <- NULL
skimr::skim_to_wide(df)
Data summary
Name Piped data
Number of rows 5110
Number of columns 11
_______________________
Column type frequency:
factor 8
numeric 3
________________________
Group variables None

Variable type: factor

skim_variable n_missing complete_rate ordered n_unique top_counts
gender 0 1.0 FALSE 3 Fem: 2994, Mal: 2115, Oth: 1
hypertension 0 1.0 FALSE 2 0: 4612, 1: 498
heart_disease 0 1.0 FALSE 2 0: 4834, 1: 276
ever_married 0 1.0 FALSE 2 Yes: 3353, No: 1757
work_type 0 1.0 FALSE 5 Pri: 2925, Sel: 819, chi: 687, Gov: 657
Residence_type 0 1.0 FALSE 2 Urb: 2596, Rur: 2514
smoking_status 1544 0.7 FALSE 3 nev: 1892, for: 885, smo: 789
stroke 0 1.0 FALSE 2 no: 4861, yes: 249

Variable type: numeric

skim_variable n_missing complete_rate mean sd p0 p25 p50 p75 p100 hist
age 0 1.00 43.23 22.61 0.08 25.00 45.00 61.00 82.00 ▅▆▇▇▆
avg_glucose_level 0 1.00 106.15 45.28 55.12 77.24 91.88 114.09 271.74 ▇▃▁▁▁
bmi 201 0.96 28.89 7.85 10.30 23.50 28.10 33.10 97.60 ▇▇▁▁▁

Target ‘stroke’ is imbalanced!

Smoking’s complete rate is only 0.7

1.2 How many smoking_status in each target class?

df %>% group_by(stroke, smoking_status) %>% summarise(N=n())

BMI’s complete rate is 0.96

1.3 How many skipped BMI in each target class?

df %>% filter(is.na(bmi)) %>% group_by(stroke) %>% summarise(N=n())

The single ‘Other’ gender observation will be removed

df <- df %>% filter(gender != "Other")

1.4 EDA

1.4.1 Overview: a pairs plot

GGally::ggpairs(df, aes(color = stroke, alpha = 0.2, dotsize = 0.02), 
        upper = list(continuous = GGally::wrap("cor", size = 2.5)),
        diag = list(continuous = "barDiag")) +
  scale_color_brewer(palette = "Set1", direction = -1) +
  scale_fill_brewer(palette = "Set1", direction = -1)

1.4.2 In detail

1.4.2.1 Stroke vs Age

ggplot(df, aes(stroke, age)) +
  geom_boxplot(aes(fill = stroke), alpha = 0.5, varwidth = T, notch = T) +
  geom_violin(aes(fill = stroke), alpha = 0.5) +
  scale_fill_brewer(palette = "Set1", direction = -1) +
  xlab("")

OBS! There are observations with ages far below 15 y.o., some even close to 0!

These look unrealistic - see the plot below

1.4.2.2 Stroke vs Age + Gender

ggplot(df, aes(stroke, age)) + 
  geom_violin(alpha=0.3) +
  geom_jitter(alpha=0.2, size=0.8, width = 0.15, height = 0.1, aes(color = gender)) + 
  geom_boxplot(alpha = 0.2) +
  scale_color_brewer(palette = "Set2", direction = -1)

Look how many observations are under 20 y.o. - and at the same time only two of them are strokes

Should we remove everyone under 15-20 y.o.?
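If we go that way, a one-liner does it (a sketch; the 15-year threshold is a judgment call, not something fixed by the data):

```r
# drop children below an assumed age threshold of 15
df_adults <- df %>% filter(age >= 15)
```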

1.4.2.3 Stroke vs BMI

ggplot(df, aes(stroke, bmi)) +
  geom_boxplot(aes(fill = stroke), alpha = 0.5, varwidth = T, notch = T) +
  geom_violin(aes(fill = stroke), alpha = 0.5) +
  scale_fill_brewer(palette = "Set1", direction = -1) +
  xlab("")

BMI over 40 is already class III obesity - BMI over 75 should not exist at all.

These are unrealistic outliers.

Let’s look at these weird points

1.4.2.4 Age vs BMI vs Glucose

ggplot(df, aes(age, bmi)) +
  geom_point(aes(color = avg_glucose_level), alpha = 0.4, size = 0.5) +
  scale_fill_brewer(palette = "Set1", direction = -1) +
  facet_grid(rows = vars(stroke)) +
  guides()

1.4.2.5 Glucose vs Age

ggplot(df, aes(age, avg_glucose_level)) +
  geom_point(aes(color = smoking_status), alpha = 0.6, size = 1) +
  scale_fill_brewer(palette = "Set1", direction = -1) +
  facet_grid(rows = vars(stroke)) +
  guides()

OBS! Kids mostly have an ‘Unknown’ smoking status; also, both target groups split into two glucose clusters – I’m curious why.
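A quick way to look at the two clusters is a plain histogram of avg_glucose_level, faceted by target (a sketch using the objects defined above; the interpretation of the second mode is a guess):

```r
# histogram of average glucose per target class;
# a second mode above ~150 mg/dL would hint at a (pre)diabetic subgroup
ggplot(df, aes(avg_glucose_level)) +
  geom_histogram(bins = 60) +
  facet_grid(rows = vars(stroke), scales = "free_y")
```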

1.4.2.6 Stroke vs Gender

gender <- df %>% group_by(stroke, gender) %>% summarize(N=n())

ggplot(gender, aes(stroke, N)) +
  geom_bar(aes(fill=gender), alpha = 0.8, stat = "identity", position = "fill")+
  scale_fill_brewer(palette = "Set2", direction = -1)+
  ylab("proportion")

Proportions in both stroke groups are roughly the same

1.4.2.7 Stroke vs Hypertension

hyptens <- df %>% group_by(stroke, hypertension) %>% summarize(N=n())

hyptens
ggplot(hyptens, aes(stroke, N)) +
  geom_bar(aes(fill=hypertension), alpha = 0.8, stat = "identity", position = "fill")+
  scale_fill_brewer(palette = "Set2", direction = -1)+
  ylab("proportion")

Hypertension occurs more often in the stroke = yes group

1.4.2.8 Stroke vs Glucose

ggplot(df, aes(stroke, avg_glucose_level)) +
  geom_boxplot(aes(fill = stroke), alpha = 0.5, varwidth = T, notch = T) +
  geom_violin(aes(fill = stroke), alpha = 0.5) +
  scale_fill_brewer(palette = "Set1", direction = -1) +
  xlab("")

1.5 Imputation

Using package mice

It uses polr (a proportional-odds model) for smoking_status and pmm (predictive mean matching) for bmi
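Before running anything, the method mice will assign to each column can be inspected (a sketch; make.method() is part of the mice package, and the assignment depends on each column's class):

```r
library(mice)

# default imputation method per column for this data frame;
# numeric columns get "pmm", multi-level factors a categorical model
make.method(df)
```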

1.5.1 Run imputation

library(mice)

imp_mice <- mice(df)
## 
##  iter imp variable
##   1   1  bmi  smoking_status
##   1   2  bmi  smoking_status
##   1   3  bmi  smoking_status
##   1   4  bmi  smoking_status
##   1   5  bmi  smoking_status
##   2   1  bmi  smoking_status
##   2   2  bmi  smoking_status
##   2   3  bmi  smoking_status
##   2   4  bmi  smoking_status
##   2   5  bmi  smoking_status
##   3   1  bmi  smoking_status
##   3   2  bmi  smoking_status
##   3   3  bmi  smoking_status
##   3   4  bmi  smoking_status
##   3   5  bmi  smoking_status
##   4   1  bmi  smoking_status
##   4   2  bmi  smoking_status
##   4   3  bmi  smoking_status
##   4   4  bmi  smoking_status
##   4   5  bmi  smoking_status
##   5   1  bmi  smoking_status
##   5   2  bmi  smoking_status
##   5   3  bmi  smoking_status
##   5   4  bmi  smoking_status
##   5   5  bmi  smoking_status
df_imp <- complete(imp_mice)

Number of NAs in BMI: 0

Number of NAs in Smoking: 0
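The two counts above come from a simple check (a sketch):

```r
# the completed data should have no missing values left
sum(is.na(df_imp$bmi))
sum(is.na(df_imp$smoking_status))
```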

1.5.2 Check distributions

1.5.2.1 BMI

bmi_imp_comp <- bind_rows(
  select(df, bmi, stroke) %>% mutate(type = "original"),
  select(df_imp, bmi, stroke) %>% mutate(type = "imputed"))

ggplot(bmi_imp_comp, aes(bmi)) +
  geom_histogram(aes(fill=type), alpha=0.8) +
  facet_grid(cols = vars(stroke))

Means have not changed, which is good, I suppose.
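The claim about the means can be verified directly (a sketch; na.rm = TRUE is needed on the original column):

```r
# compare BMI means before and after imputation
mean(df$bmi, na.rm = TRUE)
mean(df_imp$bmi)
```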

1.5.2.2 Smoking

smoke_imp_comp <- bind_rows(
  select(df, smoking_status, stroke) %>% mutate(type = "original"),
  select(df_imp, smoking_status, stroke) %>% mutate(type = "imputed"))

ggplot(smoke_imp_comp, aes(smoking_status)) +
  geom_bar(aes(fill=type), alpha=0.8, position="dodge") +
  facet_grid(cols = vars(stroke)) +
  xlab("")+
  theme(axis.text.x = element_text(angle=45, vjust = 0.5))

Counts increased roughly proportionally in all smoking groups

1.6 Scaling & Normalization

Scale numeric features (including imputed BMI)

# use caret::preProcess()
# preProcValues <- preProcess(training, method = c("center", "scale"))

df_scaled <- df_imp %>% 
  select(avg_glucose_level, age, bmi) %>% 
  scale() %>% 
  data.frame()
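Note that scale() here learns its statistics from the full data. A stricter variant, along the lines of the commented-out caret route, would fit the centering/scaling on a raw training split only and apply it to both splits (a sketch; raw_train and raw_test are hypothetical splits made before scaling):

```r
# learn center/scale on the (hypothetical) training split only,
# then apply the same transform to both splits - no test-set leakage
num_vars <- c("avg_glucose_level", "age", "bmi")
pre <- caret::preProcess(raw_train[, num_vars], method = c("center", "scale"))
train_scaled <- predict(pre, raw_train)
test_scaled  <- predict(pre, raw_test)
```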

1.7 Make Dummies

I’ve decided to keep smoking_status after all - it is dummified together with the other nominal predictors

# select vars
to_dum <- df_imp %>% select(gender, ever_married, work_type, Residence_type, smoking_status)
# make an obj
dummies <- dummyVars(~ ., data=to_dum)
# apply it
df_dummy <- data.frame(predict(dummies, newdata=to_dum))

head(df_dummy)

1.8 Join scaled and dummies and the rest

df_proc <- bind_cols(df_scaled, df_dummy, select(df, hypertension, heart_disease, stroke))
head(df_proc)

2 Modelling

2.1 Params tuning

ROC optimization tends to work better when the data is imbalanced

Kappa optimization is also a good choice

# for ROC
fit_ctrl_roc <- trainControl(## 5-fold CV
                           method = "repeatedcv",
                           number = 5,
                           repeats = 10, 
                           allowParallel = T,
                           classProbs = T,
                           summaryFunction = twoClassSummary)
# for kappa
fit_ctrl_kp <- trainControl(## 5-fold CV
                           method = "repeatedcv",
                           number = 5,
                           repeats = 10, 
                           allowParallel = T)

fit_ctrl_kp10 <- trainControl(## 10-fold CV
                           method = "repeatedcv",
                           number = 10,
                           repeats = 10, 
                           allowParallel = T)

2.2 Split data

Imbalanced data - apply SMOTE to the training set only, never to the test set

set.seed(1234)
sample_set <- createDataPartition(y = df_proc$stroke, p = .75, list = FALSE)
df_train <- df_proc[sample_set,]
df_test <- df_proc[-sample_set,]

# DMwR::SMOTE for imbalanced data: perc.over = 1725 and perc.under = 106 give a roughly 1:1 ratio
df_train_smote <- SMOTE(stroke ~ ., data.frame(df_train), perc.over = 1725, perc.under = 106)

df_train_smote %>% group_by(stroke) %>% summarise(N=n())

3 Random Forest

3.1 Training and validation

3.1.1 Kappa-optimized

For imbalanced classes

library(doParallel)

cl <- makePSOCKcluster(THREADS)
registerDoParallel(cl)

set.seed(123)

fit_rf <- train(stroke ~ ., 
                 data = df_train_smote, 
                 metric = "Kappa", 
                 method = "rf", 
                 trControl = fit_ctrl_kp,
                 tuneGrid = expand.grid(.mtry = seq(2, 6, 0.5)), # values above this range were tried as well
                 verbosity = 0,
                 verbose = FALSE)
stopCluster(cl)

fit_rf
## Random Forest 
## 
## 6735 samples
##   20 predictor
##    2 classes: 'no', 'yes' 
## 
## No pre-processing
## Resampling: Cross-Validated (5 fold, repeated 10 times) 
## Summary of sample sizes: 5388, 5388, 5387, 5389, 5388, 5389, ... 
## Resampling results across tuning parameters:
## 
##   mtry  Accuracy   Kappa    
##   2.0   0.9630585  0.9261154
##   2.5   0.9627021  0.9254026
##   3.0   0.9663099  0.9326183
##   3.5   0.9687895  0.9375775
##   4.0   0.9684925  0.9369834
##   4.5   0.9686855  0.9373695
##   5.0   0.9691161  0.9382308
##   5.5   0.9690865  0.9381716
##   6.0   0.9690420  0.9380826
## 
## Kappa was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 5.

3.1.2 ROC-optimized

cl <- makePSOCKcluster(THREADS)
registerDoParallel(cl)

set.seed(120)

fit_rf_roc <- train(stroke ~ ., 
                 data = df_train_smote, 
                 metric = "ROC", 
                 method = "rf", 
                 trControl = fit_ctrl_roc,
                 tuneGrid = expand.grid(.mtry = seq(2, 6, 0.5)),
                 verbosity = 0,
                 verbose = FALSE)
stopCluster(cl)

fit_rf_roc
## Random Forest 
## 
## 6735 samples
##   20 predictor
##    2 classes: 'no', 'yes' 
## 
## No pre-processing
## Resampling: Cross-Validated (5 fold, repeated 10 times) 
## Summary of sample sizes: 5389, 5387, 5388, 5388, 5388, 5388, ... 
## Resampling results across tuning parameters:
## 
##   mtry  ROC        Sens       Spec     
##   2.0   0.9892145  0.9856338  0.9405227
##   2.5   0.9892290  0.9851886  0.9404929
##   3.0   0.9913162  0.9899080  0.9422163
##   3.5   0.9923429  0.9925198  0.9435529
##   4.0   0.9922855  0.9928168  0.9435828
##   4.5   0.9922846  0.9926386  0.9435529
##   5.0   0.9927969  0.9924602  0.9451573
##   5.5   0.9930375  0.9914215  0.9461675
##   6.0   0.9930317  0.9915700  0.9462864
## 
## ROC was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 5.5.

3.2 Features importance

3.2.1 Kappa-optimized model

imp_vars_rf <- varImp(fit_rf)

plot(imp_vars_rf, main = "Variable Importance with RF")

3.2.2 ROC-optimized model

It is essentially the same as for the kappa-optimized model
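For completeness, the analogous call for the ROC-optimized fit (a sketch):

```r
imp_vars_rf_roc <- varImp(fit_rf_roc)
plot(imp_vars_rf_roc, main = "Variable Importance with RF (ROC-optimized)")
```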

3.3 Testing

3.3.1 ROC & AUC

A helper function for the ROC computations; prediction() and performance() come from the ROCR package

get_roc <- function(fit.obj, testing.df){
  pred_prob <- predict.train(fit.obj, newdata = testing.df, type = "prob")
  pred_roc <- prediction(predictions = pred_prob$yes, labels = testing.df$stroke)
  perf_roc <- performance(pred_roc, measure = "tpr", x.measure = "fpr")
  return(list(perf_roc, pred_roc))
}

3.3.1.1 ROC-curve for kappa-optimized model

# calculate ROC
perf_pred <- get_roc(fit_rf, df_test)
perf_rf <- perf_pred[[1]]
pred_rf <- perf_pred[[2]]

# take AUC 
auc_rf <- round(unlist(slot(performance(pred_rf, measure = "auc"), "y.values")), 3)

# plot
plot(perf_rf, main = "RF-k ROC curve", col = "steelblue", lwd = 3)
abline(a = 0, b = 1, lwd = 3, lty = 2, col = 1)
legend(x = 0.7, y = 0.3, legend = paste0("AUC = ", auc_rf))

3.3.1.2 ROC-curve for ROC-optimized model

# calculate ROC
perf_pred_roc <- get_roc(fit_rf_roc, df_test)
perf_rf_roc <- perf_pred_roc[[1]]
pred_rf_roc <- perf_pred_roc[[2]]

# take AUC 
auc_rf_roc <- round(unlist(slot(performance(pred_rf_roc, measure = "auc"), "y.values")), 3)

# plot
plot(perf_rf_roc, main = "RF-r ROC curve", col = "steelblue", lwd = 3)
abline(a = 0, b = 1, lwd = 3, lty = 2, col = 1)
legend(x = 0.7, y = 0.3, legend = paste0("AUC = ", auc_rf_roc))

So, by moving the TPR/FPR cutoff we can catch (almost) all strokes

3.3.2 TPR, FPR vs Probability cutoff

At which probability cutoff do we get TPR = 1.0?

# use the pred_rf object returned by get_roc()
plot(performance(pred_rf, measure = "tpr", x.measure = "cutoff"),
     col="steelblue", 
     ylab = "Rate", 
     xlab="Probability cutoff")

plot(performance(pred_rf, measure = "fpr", x.measure = "cutoff"), 
     add = T, col = "red")

legend(x = 0.6,y = 0.7, c("TPR (Recall)", "FPR (1-Spec)"), 
       lty = 1, col =c('steelblue', 'red'), bty = 'n', cex = 1, lwd = 2)

#abline(v = 0.02, lwd = 2, lty=6)

title("RF-k")

A cutoff of about 0.02 gives both maximum TPR and maximum FPR; the ideal cutoff should lie to the left of that point
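Instead of eyeballing the plot, a cutoff can be picked programmatically, e.g. by maximizing Youden's J = TPR - FPR (a sketch using the ROCR pred_rf object from above):

```r
# pull TPR, FPR and the matching cutoffs out of the ROCR prediction object
perf_tpr <- performance(pred_rf, measure = "tpr", x.measure = "cutoff")
cutoffs  <- perf_tpr@x.values[[1]]
tpr      <- perf_tpr@y.values[[1]]
fpr      <- performance(pred_rf, measure = "fpr", x.measure = "cutoff")@y.values[[1]]

# cutoff maximizing Youden's J (TPR - FPR)
cutoffs[which.max(tpr - fpr)]
```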

# use the pred_rf_roc object returned by get_roc()
plot(performance(pred_rf_roc, measure = "tpr", x.measure = "cutoff"),
     col = "steelblue", 
     ylab = "Rate", 
     xlab = "Probability cutoff")

plot(performance(pred_rf_roc, measure = "fpr", x.measure = "cutoff"), 
     add = T, col = "red")

legend(x = 0.6,y = 0.7, c("TPR (Recall)", "FPR (1-Spec)"), 
       lty = 1, col = c('steelblue', 'red'), bty = 'n', cex = 1, lwd = 2)

#abline(v = 0.02, lwd = 2, lty=6)

title("RF-r")

Again, the same point sits at a cutoff of about 0.02

3.3.3 Confusion matrix

3.3.3.1 Kappa-optimized

Using the desired cut-off: we want to maximize TPR (Sensitivity, Recall)!

According to the TPR/FPR plot above, a cutoff of about 0.01 is a reasonable choice

# predict probabilities
pred_prob_rf <- predict(fit_rf, newdata=df_test, type = "prob")

# choose your cut-off
cutoff = 0.01

# turn probabilities into classes
pred_class_rf <- ifelse(pred_prob_rf$yes > cutoff, "yes", "no")

pred_class_rf <- as.factor(pred_class_rf)

cm_rf <- confusionMatrix(data = pred_class_rf, 
                reference = df_test$stroke,
                mode = "everything",
                positive = "yes")

cm_rf
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  no yes
##        no  492   2
##        yes 723  60
##                                         
##                Accuracy : 0.4323        
##                  95% CI : (0.4049, 0.46)
##     No Information Rate : 0.9514        
##     P-Value [Acc > NIR] : 1             
##                                         
##                   Kappa : 0.0572        
##                                         
##  Mcnemar's Test P-Value : <2e-16        
##                                         
##             Sensitivity : 0.96774       
##             Specificity : 0.40494       
##          Pos Pred Value : 0.07663       
##          Neg Pred Value : 0.99595       
##               Precision : 0.07663       
##                  Recall : 0.96774       
##                      F1 : 0.14201       
##              Prevalence : 0.04855       
##          Detection Rate : 0.04699       
##    Detection Prevalence : 0.61316       
##       Balanced Accuracy : 0.68634       
##                                         
##        'Positive' Class : yes           
## 

3.3.3.2 ROC-optimized

# predict probabilities
pred_prob_rf_roc <- predict(fit_rf_roc, newdata = df_test, type = "prob")

# choose your cut-off
cutoff = 0.01

# turn probabilities into classes
pred_class_rf_roc <- ifelse(pred_prob_rf_roc$yes > cutoff, "yes", "no")

pred_class_rf_roc <- as.factor(pred_class_rf_roc)

cm_rf_roc <- confusionMatrix(data = pred_class_rf_roc, 
                reference = df_test$stroke,
                mode = "everything",
                positive = "yes")

cm_rf_roc
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  no yes
##        no  486   1
##        yes 729  61
##                                         
##                Accuracy : 0.4283        
##                  95% CI : (0.401, 0.456)
##     No Information Rate : 0.9514        
##     P-Value [Acc > NIR] : 1             
##                                         
##                   Kappa : 0.0584        
##                                         
##  Mcnemar's Test P-Value : <2e-16        
##                                         
##             Sensitivity : 0.98387       
##             Specificity : 0.40000       
##          Pos Pred Value : 0.07722       
##          Neg Pred Value : 0.99795       
##               Precision : 0.07722       
##                  Recall : 0.98387       
##                      F1 : 0.14319       
##              Prevalence : 0.04855       
##          Detection Rate : 0.04777       
##    Detection Prevalence : 0.61864       
##       Balanced Accuracy : 0.69194       
##                                         
##        'Positive' Class : yes           
## 

4 AdaBoost

4.1 Training and validation

4.1.1 Kappa-optimized

10-fold CV

set.seed(122)

cl <- makePSOCKcluster(THREADS)
registerDoParallel(cl)

fit_adb <- train(stroke ~ ., 
                 data = df_train_smote, 
                 metric = "Kappa", 
                 method = "AdaBoost.M1", 
                 trControl = fit_ctrl_kp10,
                 tuneGrid = expand.grid(.maxdepth = seq(10, 20, 2), .mfinal = seq(150, 180, 10), .coeflearn = c("Freund")),
                 verbosity = 0,
                 verbose = FALSE)

# coeflearn = Freund was chosen by an initial automatic grid search; the mfinal range comes from there too

stopCluster(cl)

fit_adb
## AdaBoost.M1 
## 
## 6735 samples
##   20 predictor
##    2 classes: 'no', 'yes' 
## 
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 10 times) 
## Summary of sample sizes: 6061, 6061, 6062, 6061, 6061, 6061, ... 
## Resampling results across tuning parameters:
## 
##   maxdepth  mfinal  Accuracy   Kappa    
##   10        150     0.9705116  0.9410224
##   10        160     0.9705858  0.9411708
##   10        170     0.9704673  0.9409338
##   10        180     0.9706008  0.9412008
##   12        150     0.9715958  0.9431909
##   12        160     0.9714175  0.9428343
##   12        170     0.9716998  0.9433989
##   12        180     0.9715661  0.9431314
##   14        150     0.9715806  0.9431605
##   14        160     0.9714914  0.9429821
##   14        170     0.9717588  0.9435168
##   14        180     0.9719371  0.9438735
##   16        150     0.9720412  0.9440816
##   16        160     0.9719669  0.9439331
##   16        170     0.9720116  0.9440223
##   16        180     0.9719374  0.9438741
##   18        150     0.9716107  0.9432207
##   18        160     0.9715513  0.9431018
##   18        170     0.9715959  0.9431910
##   18        180     0.9715958  0.9431909
##   20        150     0.9716547  0.9433087
##   20        160     0.9715508  0.9431008
##   20        170     0.9714321  0.9428634
##   20        180     0.9717291  0.9434573
## 
## Tuning parameter 'coeflearn' was held constant at a value of Freund
## Kappa was used to select the optimal model using the largest value.
## The final values used for the model were mfinal = 150, maxdepth = 16
##  and coeflearn = Freund.

4.2 Testing

4.2.1 ROC curve

# calculate ROC
perf_pred_adb <- get_roc(fit_adb, df_test)
perf_adb <- perf_pred_adb[[1]]
pred_adb <- perf_pred_adb[[2]]

# take AUC 
auc_adb <- round(unlist(slot(performance(pred_adb, measure = "auc"), "y.values")), 3)

# plot
plot(perf_adb, main = "AdaBoost ROC curve", col = "steelblue", lwd = 3)
abline(a = 0, b = 1, lwd = 3, lty = 2, col = 1)
legend(x = 0.7, y = 0.3, legend = paste0("AUC = ", auc_adb))

4.2.2 TPR, FPR vs Probability cutoff

At which probability cutoff do we get TPR = 1.0?

# use the pred_adb object returned by get_roc()
plot(performance(pred_adb, measure = "tpr", x.measure = "cutoff"),
     col="steelblue", 
     ylab = "Rate", 
     xlab="Probability cutoff")

plot(performance(pred_adb, measure = "fpr", x.measure = "cutoff"), 
     add = T, col = "red")

legend(x = 0.6,y = 0.7, c("TPR (Recall)", "FPR (1-Spec)"), 
       lty = 1, col =c('steelblue', 'red'), bty = 'n', cex = 1, lwd = 2)

#abline(v = 0.1, lwd = 2, lty=6)

title("AdaBoost.M1")

4.2.3 Confusion matrix

pred_prob_adb <- predict(fit_adb, newdata=df_test, type = "prob")

# choose your cut-off
cutoff = 0.12

# turn probabilities into classes
pred_class_adb <- ifelse(pred_prob_adb$yes > cutoff, "yes", "no")

pred_class_adb <- as.factor(pred_class_adb)

cm_adb <- confusionMatrix(data = pred_class_adb, 
                reference = df_test$stroke,
                mode="everything",
                positive="yes")

cm_adb
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction  no yes
##        no  345   1
##        yes 870  61
##                                           
##                Accuracy : 0.3179          
##                  95% CI : (0.2924, 0.3443)
##     No Information Rate : 0.9514          
##     P-Value [Acc > NIR] : 1               
##                                           
##                   Kappa : 0.035           
##                                           
##  Mcnemar's Test P-Value : <2e-16          
##                                           
##             Sensitivity : 0.98387         
##             Specificity : 0.28395         
##          Pos Pred Value : 0.06552         
##          Neg Pred Value : 0.99711         
##               Precision : 0.06552         
##                  Recall : 0.98387         
##                      F1 : 0.12286         
##              Prevalence : 0.04855         
##          Detection Rate : 0.04777         
##    Detection Prevalence : 0.72905         
##       Balanced Accuracy : 0.63391         
##                                           
##        'Positive' Class : yes             
## 

5 Extreme Gradient Boosting: xgbTree

xgbTree has 7 tuning parameters: nrounds, max_depth, eta, gamma, colsample_bytree, min_child_weight and subsample

5.1 Training and validation

5.1.1 Kappa-optimized

10-fold CV

set.seed(121)

fit_xgb_kp <- train(stroke ~ ., 
                 data = df_train_smote, 
                 method = "xgbTree",
                 metric = "Kappa", 
                 trControl = fit_ctrl_kp10,
                 tuneGrid = expand.grid(
                   .nrounds = 100,
                   .max_depth = seq(3, 15, 1),
                   .eta = 0.3,
                   .gamma = 0.01,
                   .colsample_bytree = 1,
                   .min_child_weight = 1,
                   .subsample = 1
                 ),
                 verbose = FALSE,
                 verbosity = 0)

fit_xgb_kp$bestTune

5.2 Features importance

imp_vars_xgb <- varImp(fit_xgb_kp)

plot(imp_vars_xgb, main = "Variable Importance with XGB")

5.3 Testing

5.3.1 ROC curve

# calculate ROC
perf_pred_xgb <- get_roc(fit_xgb_kp, df_test)
perf_xgb <- perf_pred_xgb[[1]]
pred_xgb <- perf_pred_xgb[[2]]


# take AUC 
auc_xgb <- round(unlist(slot(performance(pred_xgb, measure = "auc"), "y.values")), 3)

# plot
plot(perf_xgb, main = "XGB ROC curve", col = "steelblue", lwd = 3)
abline(a = 0, b = 1, lwd = 3, lty = 2, col = 1)
legend(x = 0.7, y = 0.3, legend = paste0("AUC = ", auc_xgb))

5.3.2 TPR, FPR vs Probability cutoff

# use pred_xgb object
plot(performance(pred_xgb, measure = "tpr", x.measure = "cutoff"),
     col = "steelblue", 
     ylab = "Rate", 
     xlab = "Probability cutoff")

plot(performance(pred_xgb, measure = "fpr", x.measure = "cutoff"), 
     add = T, col = "red")

legend(x = 0.6,y = 0.7, c("TPR (Recall)", "FPR (1-Spec)"), 
       lty = 1, col = c('steelblue', 'red'), bty = 'n', cex = 1, lwd = 2)

#abline(v = 0.1, lwd = 2, lty=6)

title("xgbTree")

5.3.3 Confusion matrix

pred_prob_xgb <- predict(fit_xgb_kp, newdata=df_test, type = "prob")

# choose your cut-off
cutoff = 0.12

# turn probabilities into classes
pred_class_xgb <- ifelse(pred_prob_xgb$yes > cutoff, "yes", "no")

pred_class_xgb <- as.factor(pred_class_xgb)

cm_xgb <- confusionMatrix(data = pred_class_xgb, 
                reference = df_test$stroke,
                mode = "everything",
                positive = "yes")

cm_xgb
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction   no  yes
##        no  1065   41
##        yes  150   21
##                                           
##                Accuracy : 0.8504          
##                  95% CI : (0.8297, 0.8696)
##     No Information Rate : 0.9514          
##     P-Value [Acc > NIR] : 1               
##                                           
##                   Kappa : 0.1174          
##                                           
##  Mcnemar's Test P-Value : 5.514e-15       
##                                           
##             Sensitivity : 0.33871         
##             Specificity : 0.87654         
##          Pos Pred Value : 0.12281         
##          Neg Pred Value : 0.96293         
##               Precision : 0.12281         
##                  Recall : 0.33871         
##                      F1 : 0.18026         
##              Prevalence : 0.04855         
##          Detection Rate : 0.01644         
##    Detection Prevalence : 0.13391         
##       Balanced Accuracy : 0.60763         
##                                           
##        'Positive' Class : yes             
## 

6 Save the workspace


save.image("data/workspace.RData")
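To pick the analysis up later, the saved image can be restored in a fresh session with:

```r
# restore all objects saved above into the current session
load("data/workspace.RData")
```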